In [1]:
__author__ = 'Alice Jacques <alice.jacques@noirlab.edu>, NOIRLab Astro Data Lab Team <datalab@noirlab.edu>' 
__version__ = '20210110' 
__datasets__ = ['ls_dr8','sdss_dr16','gaia_dr2','des_dr1','smash_dr2','unwise_dr1','allwise'] 
__keywords__ = ['crossmatch','joint query','mydb','vospace','image cutout']

How to use the pre-crossmatched tables at Astro Data Lab

by Alice Jacques and the NOIRLab Astro Data Lab Team

Goals

  • Learn how to use a pre-crossmatched table to do a joint query on two Data Lab data sets
  • Learn how to do an efficient crossmatch of a user-provided data table against a Data Lab pre-crossmatched table

Summary

Crossmatch table naming template

The crossmatch tables at Astro Data Lab are named as follows:

schema1.xNpN__table1__schema2__table2

where NpN encodes the crossmatch radius in arcseconds, with 'p' standing in for the decimal point (dots '.' are not allowed in table names).

Example:

ls_dr8.x1p5__tractor_primary_n__gaia_dr2__gaia_source

is a crossmatch table (indicated by the leading x) located in the ls_dr8 schema. It matches the ls_dr8.tractor_primary_n table against the gaia_dr2.gaia_source table (which lives in the gaia_dr2 schema) within a 1.5-arcsecond radius ('1p5').

This is admittedly long, but clean, consistent, and, most importantly, parsable. Double underscores '__' are used as separators to distinguish them from the single underscores often found in schema and table names.
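Because the scheme is parsable, a crossmatch table name can be decomposed programmatically. A minimal sketch (the helper name parse_xmatch_name is ours, not a Data Lab API):

```python
def parse_xmatch_name(name):
    """Decompose a Data Lab crossmatch table name into its components."""
    schema1, rest = name.split('.', 1)                    # schema of the left/first table
    radius, table1, schema2, table2 = rest.split('__')    # '__' separates the parts
    # 'x1p5' -> 1.5 arcsec: strip the leading 'x', 'p' stands for the decimal point
    radius_arcsec = float(radius[1:].replace('p', '.'))
    return schema1, table1, schema2, table2, radius_arcsec

print(parse_xmatch_name('ls_dr8.x1p5__tractor_primary_n__gaia_dr2__gaia_source'))
# ('ls_dr8', 'tractor_primary_n', 'gaia_dr2', 'gaia_source', 1.5)
```

Note that single underscores inside schema and table names survive the split on '__' untouched, which is exactly why the double-underscore separator was chosen.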

Columns in crossmatch tables

All crossmatch tables shall be minimalist, i.e. contain only these columns: id1, ra1, dec1, id2, ra2, dec2, distance. Column descriptions in the crossmatch table shall contain the original column names in parentheses (which makes them parsable).

For example:

ls_dr8.x1p5__tractor_primary_n__gaia_dr2__gaia_source

Column     Description                                       Datatype
id1        ID in left/first table (ls_id)                    BIGINT
ra1        Right ascension in left/first table (ra)          DOUBLE
dec1       Declination in left/first table (dec)             DOUBLE
id2        ID in right/second table (source_id)              BIGINT
ra2        Right ascension in right/second table (ra)        DOUBLE
dec2       Declination in right/second table (dec)           DOUBLE
distance   Distance between ra1,dec1 and ra2,dec2 (arcsec)   DOUBLE

Datatypes in crossmatch tables

  • The data types of columns id1 and id2 shall be retained from the mother tables. In the example above both are BIGINT, which is common but need not hold for all data sets.
  • The data types for columns ra1, dec1, ra2, dec2 shall be DOUBLE, which they usually will be anyway.
  • The column distance can be either REAL or DOUBLE.

Overview

  • The following 5 data sets are considered the main reference tables; they are crossmatched against all existing data sets with sky overlap, and against each newly ingested data set:
    • latest gaia_drN.gaia_source
    • latest nsc_drN.object
    • latest unwise_drN.object
    • allwise.source
    • latest sdss_drN.specobj
  • "Crossmatch" currently means "single nearest neighbor", which is the only crossmatch mode offered at Data Lab at present.
  • Only object tables are crossmatched, not single-epoch measurement or metadata tables.
  • For every crossmatch table with table1 as the left/first table and table2 as the right/second table, there exists a corresponding crossmatch table with table2 as the left/first table and table1 as the right/second table.
    • For example, allwise.x1p5__source__des_dr1__main and des_dr1.x1p5__main__allwise__source.

The list of available crossmatch tables can be viewed on our query interface here, under their respective schemas.

Disclaimer & attribution

If you use this notebook for your published science, please acknowledge the following:

Imports and setup

In [2]:
# std lib
from getpass import getpass

# 3rd party
from astropy.utils.data import download_file  #import file from URL
from matplotlib.ticker import NullFormatter
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline

# Data Lab
from dl import authClient as ac, queryClient as qc, storeClient as sc
from dl.helpers.utils import convert # converts table to Pandas dataframe object

Authentication

Much of the functionality of Data Lab can be accessed without explicitly logging in (the service then uses an anonymous login), but some capabilities, for instance saving the results of your queries to your virtual storage space, require a login (i.e. you will need a registered user account).

If you need to log in to Data Lab, issue this command, and respond according to the instructions:

In [3]:
#ac.login(input("Enter user name: (+ENTER) "),getpass("Enter password: (+ENTER) "))
ac.whoAmI()
Out[3]:
'demo00'

Accessing the pre-crossmatched tables

We can use Data Lab's Query Client to access the pre-crossmatched tables hosted by Data Lab. First, let's get the total number of objects (nrows) in SDSS DR16 that are also in LS DR8:

In [4]:
%%time
query="SELECT nrows FROM tbl_stat WHERE schema='sdss_dr16' and tbl_name='x1p5__specobj__ls_dr8__tractor_primary'"

# Call query manager
response = qc.query(sql=query)
print(response)
nrows
4542857

CPU times: user 26.9 ms, sys: 3.43 ms, total: 30.3 ms
Wall time: 166 ms

Now let's print just the first 100 rows:

In [5]:
query = "SELECT * FROM sdss_dr16.x1p5__specobj__ls_dr8__tractor_primary LIMIT 100"
response = qc.query(sql=query)
result = convert(response) # convert the table into a Pandas dataframe object
result
Out[5]:
id1 ra1 dec1 id2 ra2 dec2 distance
0 3384465917919389696 287.22826 48.064735 8797230351783516 287.228165 48.064735 0.000063
1 3384466192797296640 287.44889 48.229698 8797230414957399 287.448870 48.229697 0.000014
2 3384462344506599424 287.38750 48.168965 8797230414890143 287.387517 48.168933 0.000034
3 3384463718896134144 287.69779 48.382804 8797230477803600 287.697861 48.382752 0.000070
4 3384465093285668864 287.54718 48.407654 8797230477804882 287.547174 48.407548 0.000106
... ... ... ... ... ... ... ...
95 3384471690355435520 287.70990 48.888661 8797230602453456 287.709937 48.888637 0.000034
96 3384469491332179968 287.66389 48.944252 8797230602454731 287.663800 48.944491 0.000247
97 3384480486448457728 287.22115 48.827232 8797230540199804 287.221105 48.827183 0.000057
98 3384477737669388288 287.29420 48.927487 8797230602388155 287.294186 48.927487 0.000009
99 3384470590843807744 287.46812 49.027895 8797230602391658 287.468139 49.027900 0.000013

100 rows × 7 columns

Writing a JOIN query

To extract only the columns relevant to our science question from multiple data tables, we can write a query that uses a JOIN statement. There are 4 main types of JOIN statements, and which one to choose depends on how we want the information to be extracted.

  1. (INNER) JOIN: Returns rows that have matching values in both tables
  2. LEFT (OUTER) JOIN: Returns all rows from the left table, and the matched rows from the right table
  3. RIGHT (OUTER) JOIN: Returns all rows from the right table, and the matched rows from the left table
  4. FULL (OUTER) JOIN: Returns all rows when there is a match in either left or right table

Take a moment to look over the figure below outlining the various JOIN statement types.
NOTE: the default JOIN is an INNER JOIN.
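The four JOIN types can be tried out locally on toy tables with Pandas, whose merge function implements the same semantics through its how parameter (a small illustrative sketch, not Data Lab code):

```python
import pandas as pd

# two toy tables sharing the key column 'id'; ids 2 and 3 appear in both
left  = pd.DataFrame({'id': [1, 2, 3], 'z':    [0.1, 0.2, 0.3]})
right = pd.DataFrame({'id': [2, 3, 4], 'gmag': [18.2, 19.5, 20.1]})

for how in ('inner', 'left', 'right', 'outer'):
    merged = pd.merge(left, right, on='id', how=how)
    print(how, len(merged))   # inner 2, left 3, right 3, outer 4
```

The row counts mirror the definitions above: INNER keeps only the matched ids (2), LEFT and RIGHT keep all rows of their respective side (3 each), and FULL OUTER keeps every id from either side (4).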

JOIN LATERAL

In nearest-neighbor crossmatch queries we use JOIN LATERAL, which acts like a SQL foreach loop: it iterates over each row of a result set and evaluates a subquery using that row as a parameter.
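In plain Python, this foreach-over-rows behavior looks roughly like the following toy sketch (made-up rows and a small-angle flat-sky distance, purely for illustration):

```python
import math

# toy "tables": each row is (id, ra, dec) in degrees
left  = [('a', 150.000, -30.000), ('b', 150.010, -30.005)]
right = [('x', 150.0001, -30.0001), ('y', 150.0102, -30.0049), ('z', 140.0, -30.0)]

radius_deg = 1.5 / 3600.0   # 1.5 arcsec crossmatch radius, in degrees

def dist_deg(ra1, dec1, ra2, dec2):
    # small-angle approximation, adequate for arcsecond-scale separations
    dra = (ra1 - ra2) * math.cos(math.radians(dec1))
    return math.hypot(dra, dec1 - dec2)

matches = []
for lid, lra, ldec in left:                              # foreach row of the left table...
    cands = [(rid, dist_deg(lra, ldec, rra, rdec))       # ...the "lateral subquery":
             for rid, rra, rdec in right                 # keep candidates within the radius
             if dist_deg(lra, ldec, rra, rdec) < radius_deg]
    if cands:                                            # ORDER BY distance ASC LIMIT 1
        best = min(cands, key=lambda c: c[1])
        matches.append((lid, best[0]))

print(matches)   # [('a', 'x'), ('b', 'y')]
```

In the real queries below, q3c_join plays the role of the radius filter and q3c_dist the role of the distance used for ordering.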

Joint query of LS and SDSS catalogs

Here we will examine spectroscopic redshifts from SDSS DR16 and photometry from LS DR8. The two crossmatch tables relating these catalogs are ls_dr8.x1p5__tractor__sdss_dr16__specobj and sdss_dr16.x1p5__specobj__ls_dr8__tractor_primary. Which of the two to use should be based on the science question being posed. For instance, the question 'how does a galaxy's structure change with redshift?' depends on the redshift values obtained from SDSS DR16, so we should use the crossmatch table that has SDSS DR16 as the first table. So, the relevant information we want from our 3 tables of interest for this example is:

  1. "X" = sdss_dr16.x1p5__specobj__ls_dr8__tractor_primary
    • ra1 (RA of sdss object)
    • dec1 (Dec of sdss object)
  2. "S" = sdss_dr16.specobj
    • z (redshift)
    • class (spectroscopic class: GALAXY, QSO, or STAR)
    • veldisp (velocity dispersion)
    • veldisperr (error in velocity dispersion)
  3. "L" = ls_dr8.tractor
    • ra (RA of ls object)
    • dec (Dec of ls object)
    • type (morphological model: PSF=stellar, REX=round exponential galaxy, DEV=deVauc, EXP=exponential, COMP=composite, DUP=Gaia source fit by different model)
    • g_r (computed g-r color)
    • r_z (computed r-z color)

Write the query

Now that we know what we want and where we want it from, let's write the query and then print the results on screen. Here we use two join statements: the first will search in the SDSS DR16 specobj table for rows that have the same SDSS id value (specobjid) as in the pre-crossmatched table (id1) and retrieve the desired columns from the SDSS DR16 specobj table. The second will search in the LS DR8 tractor table for rows that have the same LS id value (ls_id) as in the pre-crossmatched table (id2) and retrieve the desired columns from the LS DR8 tractor table.

In [6]:
query = ("""SELECT 
           X.ra1 as ra_sdss,X.dec1 as dec_sdss,
           S.z,S.class,S.veldisp,S.veldisperr,
           L.ra as ra_ls,L.dec as dec_ls,L.type,L.g_r,L.r_z
         FROM sdss_dr16.x1p5__specobj__ls_dr8__tractor_primary as X 
         JOIN sdss_dr16.specobj as S ON X.id1 = S.specobjid 
         JOIN ls_dr8.tractor AS L ON X.id2 = L.ls_id
         WHERE X.ra1 BETWEEN %s and %s and X.dec1 BETWEEN %s and %s
         """) %(110,200,7.,40.)  #large region
print(query)
SELECT 
           X.ra1 as ra_sdss,X.dec1 as dec_sdss,
           S.z,S.class,S.veldisp,S.veldisperr,
           L.ra as ra_ls,L.dec as dec_ls,L.type,L.g_r,L.r_z
         FROM sdss_dr16.x1p5__specobj__ls_dr8__tractor_primary as X 
         JOIN sdss_dr16.specobj as S ON X.id1 = S.specobjid 
         JOIN ls_dr8.tractor AS L ON X.id2 = L.ls_id
         WHERE X.ra1 BETWEEN 110 and 200 and X.dec1 BETWEEN 7.0 and 40.0
         
In [7]:
%%time
df = qc.query(sql=query,fmt='pandas')
df
CPU times: user 3.2 s, sys: 1.31 s, total: 4.5 s
Wall time: 14.2 s
Out[7]:
ra_sdss dec_sdss z class veldisp veldisperr ra_ls dec_ls type g_r r_z
0 123.12650 39.993317 0.124905 GALAXY 221.7120 10.6741 123.126449 39.993302 DEV 1.153230 0.772179
1 123.36125 39.996015 -0.000118 STAR 0.0000 0.0000 123.361280 39.995986 PSF 1.564990 1.792350
2 123.23940 39.990355 0.067814 GALAXY 80.2651 13.9739 123.239365 39.990334 EXP 0.747831 0.448318
3 123.18048 39.935782 0.067945 GALAXY 102.3340 14.9775 123.180441 39.935786 EXP 0.899620 0.669617
4 123.25379 39.959822 0.000056 STAR 0.0000 0.0000 123.253686 39.959713 PSF 1.730960 2.775590
... ... ... ... ... ... ... ... ... ... ... ...
1122603 130.53411 9.252107 0.299707 GALAXY 164.3400 27.1577 130.534116 9.252130 COMP 1.510160 0.943209
1122604 130.66395 9.108777 0.000608 STAR 0.0000 0.0000 130.663952 9.108769 PSF 0.310471 0.075777
1122605 130.63483 9.169138 2.204500 QSO 0.0000 0.0000 130.634859 9.169143 PSF 0.097034 0.493404
1122606 130.64998 9.145504 0.176922 QSO 0.0000 0.0000 130.649974 9.145496 PSF 0.239815 0.326403
1122607 130.57989 9.258228 0.287373 GALAXY 200.5690 13.9157 130.579908 9.258234 DEV 1.553710 0.851355

1122608 rows × 11 columns

Saving results to VOSpace

VOSpace is a convenient storage space for users to save their work. It can store any data or file type. We can save the results from the same query to our virtual storage space:

In [8]:
response = qc.query(sql=query,fmt='csv',out='vos://testresult.csv')

Let's ensure the file was saved in VOSpace:

In [9]:
sc.ls(name='vos://testresult.csv')
Out[9]:
'testresult.csv'

Now let's remove the file we just saved to VOSpace:

In [10]:
sc.rm(name='vos://testresult.csv')
Out[10]:
'OK'

Let's confirm the file was removed from VOSpace; attempting to remove it again returns an error:

In [11]:
sc.rm(name='vos://testresult.csv')
Out[11]:
'A Node does not exist with the requested URI.'

Saving results to MyDB

MyDB is a remote per-user relational database that can store data tables. Furthermore, the results of queries can be saved directly to MyDB, as we show in the following example:

In [12]:
response = qc.query(sql=query, fmt='csv', out='mydb://testresult')

Ensure the table has been saved to MyDB by calling the mydb_list() function, which will list all tables currently in a user's MyDB:

In [13]:
print(qc.mydb_list(),"\n")
gaia_sample,created:2021-01-10 16:30:50 MST
gaia_sample_xmatch,created:2021-01-10 16:30:51 MST
gals,created:2021-01-10 16:38:01 MST
testresult,created:2021-01-10 16:45:09 MST
 

Now let's drop the table from our MyDB.

In [14]:
qc.mydb_drop('testresult')
Out[14]:
'OK'

Ensure it has been removed by calling the mydb_list() function again:

In [15]:
print(qc.mydb_list(),"\n")
gaia_sample,created:2021-01-10 16:30:50 MST
gaia_sample_xmatch,created:2021-01-10 16:30:51 MST
gals,created:2021-01-10 16:38:01 MST
 

Crossmatch a user-provided data table and a pre-crossmatched table

We can construct a query to run a crossmatch in the database using the q3c_join() function, which identifies all matching objects within a specified radius in degrees (see details on using Q3C functions). For this example, we will search only for the single nearest neighbor. For different examples of crossmatching, see our How to crossmatch tables notebook.

First, let's query a small selection of sample data from the Data Lab database and store it in MyDB as gaia_sample. This will act as our "user-provided table".

In [16]:
query = """SELECT source_id,ra,dec,parallax,pmra,pmdec 
            FROM gaia_dr2.gaia_source 
            WHERE ra<200 AND ra>124 AND random_id<10 
            LIMIT 70000"""
print(query)
SELECT source_id,ra,dec,parallax,pmra,pmdec 
            FROM gaia_dr2.gaia_source 
            WHERE ra<200 AND ra>124 AND random_id<10 
            LIMIT 70000
In [17]:
%%time
response = qc.query(sql=query,out='mydb://gaia_sample',drop=True)
CPU times: user 21.1 ms, sys: 3.71 ms, total: 24.8 ms
Wall time: 3.95 s

Write a crossmatch query

Next, let's crossmatch our gaia_sample table with Data Lab's pre-crossmatched table between SMASH DR2 and allWISE, smash_dr2.x1p5__object__allwise__source. We'll write our crossmatch query using the q3c_join() and q3c_dist() functions, searching for the nearest neighbor within a 1.5 arcsec radius (which must be converted into degrees for the query, so we divide by 3600.0). We will then save the result in MyDB as gaia_sample_xmatch.

In [18]:
%%time
qu = """SELECT
        G.source_id,ss.id1,ss.id2,G.ra,G.dec,ss.ra1,ss.dec1,ss.ra2,ss.dec2,
        (q3c_dist(G.ra,G.dec,ss.ra1,ss.dec1)*3600.0) as dist_arcsec
        FROM mydb://gaia_sample AS G
        JOIN LATERAL (
            SELECT S.id1,S.id2,S.ra1,S.dec1,S.ra2,S.dec2
            FROM 
                smash_dr2.x1p5__object__allwise__source AS S
            WHERE 
                q3c_join(G.ra,G.dec,S.ra1,S.dec1, 1.5/3600.0)
            ORDER BY
                q3c_dist(G.ra,G.dec,S.ra1,S.dec1)
            ASC LIMIT 1
            ) AS ss ON true
    """
resp = qc.query(sql=qu,out='mydb://gaia_sample_xmatch',drop=True)
CPU times: user 27.9 ms, sys: 345 µs, total: 28.3 ms
Wall time: 1.92 s

We can query the newly created table from MyDB and convert it into a Pandas DataFrame object to display it on screen:

In [19]:
query = "SELECT * FROM mydb://gaia_sample_xmatch"
df = qc.query(sql=query,fmt='pandas')
df
Out[19]:
source_id id1 id2 ra dec ra1 dec1 ra2 dec2 dist_arcsec
0 5205696522501120512 Field80.880606 1482074301351023387 150.869988 -74.496170 150.869990 -74.496177 150.869098 -74.496177 0.023617
1 5205674807146027008 Field80.61312 1535074301351011624 151.032199 -74.745129 151.032198 -74.745131 151.032752 -74.745275 0.008880
2 6141410299609363840 Field127.835819 1993039401351059307 199.455692 -38.742563 199.455696 -38.742564 199.455683 -38.742551 0.010309
3 5467823633613278720 Field85.328869 1577028801351061337 157.482148 -28.132586 157.482144 -28.132586 157.482181 -28.132587 0.013867
4 5659712743651975424 Field76.497391 1486024301351024709 149.263676 -24.547096 149.263677 -24.547099 149.263713 -24.547078 0.011663
... ... ... ... ... ... ... ... ... ... ...
637 5388501192592385536 Field91.325126 1638042501351015781 163.258046 -42.610578 163.258049 -42.610577 163.258049 -42.610625 0.009637
638 5459798298242774016 Field77.1011912 1493033401351048226 149.796546 -33.028633 149.796547 -33.028635 149.796653 -33.028629 0.005438
639 6152131053375898496 Field123.257555 1918040901351043838 191.736605 -40.442602 191.736609 -40.442599 191.736650 -40.442600 0.016839
640 5441173017945297920 Field163.329494 1591037901351024901 159.794886 -38.270692 159.794889 -38.270695 159.794912 -38.270608 0.012900
641 5390277174450667520 Field165.440845 1661040901351015490 165.908202 -41.241788 165.908207 -41.241784 165.908317 -41.241735 0.017132

642 rows × 10 columns

Write the joint query

Now we can write a query using the JOIN statement in order to extract the columns we want from our tables of interest. Just as in the previous section, let's first make an outline of which tables we'd like to extract columns from.

  1. "X" = mydb://gaia_sample_xmatch
    • source_id (source id from gaia dr2)
    • id1 (source id from smash dr2)
    • id2 (source id from allwise)
    • ra (RA value from gaia dr2)
    • dec (Dec value from gaia dr2)
  2. "s" = smash_dr2.object
    • gmag (weighted-average, calibrated g-band magnitude, 99.99 if no detection)
    • rmag (weighted-average, calibrated r-band magnitude, 99.99 if no detection)
    • zmag (weighted-average, calibrated z-band magnitude, 99.99 if no detection)
  3. "a" = allwise.source
    • w1mpro (W1 magnitude measured with profile-fitting photometry)
    • w2mpro (W2 magnitude measured with profile-fitting photometry)
    • w3mpro (W3 magnitude measured with profile-fitting photometry)
  4. "g" = mydb://gaia_sample
    • parallax
    • pmra (proper motion in right ascension direction)
    • pmdec (proper motion in declination direction)
In [20]:
query = ("""SELECT 
           X.source_id,X.id1,X.id2,X.ra,X.dec,
           s.gmag,s.rmag,s.zmag,
           a.w1mpro,a.w2mpro,a.w3mpro,
           g.parallax,g.pmra,g.pmdec
         FROM mydb://gaia_sample_xmatch as X 
         JOIN smash_dr2.object as s ON X.id1 = s.id 
         JOIN allwise.source AS a ON X.id2 = a.cntr
         JOIN mydb://gaia_sample AS g ON X.source_id = g.source_id
         """)
print(query)
SELECT 
           X.source_id,X.id1,X.id2,X.ra,X.dec,
           s.gmag,s.rmag,s.zmag,
           a.w1mpro,a.w2mpro,a.w3mpro,
           g.parallax,g.pmra,g.pmdec
         FROM mydb://gaia_sample_xmatch as X 
         JOIN smash_dr2.object as s ON X.id1 = s.id 
         JOIN allwise.source AS a ON X.id2 = a.cntr
         JOIN mydb://gaia_sample AS g ON X.source_id = g.source_id
         
In [21]:
df = qc.query(sql=query,fmt='pandas')
df
Out[21]:
source_id id1 id2 ra dec gmag rmag zmag w1mpro w2mpro w3mpro parallax pmra pmdec
0 6131589629253634304 Field168.1156764 1853045501351008859 185.012314 -45.934668 19.4154 18.3814 17.7553 15.931 16.042 13.054 0.158196 -9.417819 -4.594620
1 5441239847637128832 Field163.1279533 1587039401351055638 158.835349 -38.680901 20.3020 99.9900 99.9900 15.739 15.543 12.188 1.568020 -13.376750 0.883958
2 5384595418049695616 Field104.1030623 1741039401351048892 174.413163 -39.098332 20.3235 19.4183 18.8627 17.403 17.325 12.088 0.878178 -2.315444 -1.981345
3 5199100724044211456 Field87.1011857 1656078801351008164 164.270068 -79.405031 19.8656 18.3270 17.0464 14.755 14.799 12.924 1.008309 -7.250559 3.201105
4 5467597104154887040 Field85.1028867 1577028801351048828 158.070415 -28.561233 20.3464 19.3088 18.7218 17.126 16.309 12.468 0.482538 -3.713790 0.281505
... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
107 5198757405832808576 Field87.1204921 1590080301351016502 156.965653 -80.528784 21.9378 99.9900 99.9900 16.081 15.829 12.932 1.626588 2.234972 -16.577670
108 5381961366150238080 Field166.1703433 1727044001351026575 173.588279 -43.889225 17.9590 99.9900 99.9900 14.997 14.988 12.565 0.604291 -7.352319 -5.747996
109 5198195207497561088 Field106.1189889 1674080301351024353 170.834267 -80.526241 18.5196 17.9265 17.5419 16.117 16.245 12.933 0.259513 -4.197899 -1.071158
110 5458462327193883392 Field161.1718085 1516034901351031680 151.645492 -34.804156 18.4303 99.9900 99.9900 15.587 15.786 12.449 0.353570 -7.475217 5.853638
111 5459798298242774016 Field77.1011912 1493033401351048226 149.796546 -33.028633 22.3068 20.7488 18.7407 16.427 15.961 12.290 0.869519 -0.222829 2.052986

112 rows × 14 columns

Speed test

Here we compare the speed of using the q3c_join() function to crossmatch directly in a JOIN query (query1) versus using a pre-crossmatched table in a JOIN query (query2). We select objects from the two catalogs and retrieve the same specified columns for the two queries. We will see that query1 times out after 300 seconds (5 minutes) and fails to retrieve results, while query2 runs for about 60-90 seconds (1-1.5 minutes) and will retrieve the 3.6 million rows we queried for.

First, running the crossmatch ourselves:

In [22]:
%%time
query1 = """SELECT
           a.cntr as id1,a.ra as ra1,a.dec as dec1,a.pmdec,a.pmra,a.w1mpro,a.w2mpro,
           gg.specobjid as id2,gg.ra as ra2,gg.dec as dec2,gg.z,gg.class,gg.veldisp,gg.veldisperr,
           (q3c_dist(a.ra,a.dec,gg.ra,gg.dec)*3600.0) as dist_arcsec 
         FROM 
            allwise.source AS a
         INNER JOIN LATERAL (
            SELECT s.specobjid,s.ra,s.dec,s.z,s.class,s.veldisp,s.veldisperr
            FROM 
                sdss_dr16.specobj AS s
            WHERE
                q3c_join(a.ra, a.dec, s.ra, s.dec, 1.5/3600.0)
            ORDER BY
                q3c_dist(a.ra,a.dec,s.ra,s.dec)
            ASC LIMIT 1
        ) as gg ON true
"""
df1 = qc.query(sql=query1,timeout=300,fmt='pandas')
df1
---------------------------------------------------------------------------
queryClientError                          Traceback (most recent call last)
<timed exec> in <module>

/data0/sw/anaconda3/lib/python3.7/site-packages/noaodatalab-2.19.0-py3.7.egg/dl/Util.py in __call__(self, *args, **kw)
     80             return function(self.obj, *args, **kw)
     81         else:
---> 82             return function(*args, **kw)
     83 
     84     def __repr__(self):

/data0/sw/anaconda3/lib/python3.7/site-packages/noaodatalab-2.19.0-py3.7.egg/dl/queryClient.py in query(token, adql, sql, fmt, out, async_, drop, profile, **kw)
    542     return qc_client._query (token=def_token(token), adql=adql, sql=sql, 
    543                              fmt=fmt, out=out, async_=async_, drop=drop, profile=profile,
--> 544                              **kw)
    545 
    546 

/data0/sw/anaconda3/lib/python3.7/site-packages/noaodatalab-2.19.0-py3.7.egg/dl/queryClient.py in _query(self, token, adql, sql, fmt, out, async_, drop, profile, **kw)
   2028         r = requests.get (dburl, headers=headers, timeout=timeout)
   2029         if r.status_code != 200:
-> 2030             raise queryClientError (r.text)
   2031         resp = qcToString(r.content)
   2032 

queryClientError: Error: QM: Query timeout at 300 sec

Now, the same but using pre-crossmatched tables:

In [23]:
%%time
query2 = """SELECT 
           X.id1,X.id2,X.ra1,X.dec1,X.ra2,X.dec2,X.distance as dist_arcsec,
           a.pmdec,a.pmra,a.w1mpro,a.w2mpro,
           s.z,s.class,s.veldisp,s.veldisperr
         FROM 
             allwise.x1p5__source__sdss_dr16__specobj as X 
         JOIN 
             allwise.source as a ON X.id1 = a.cntr 
         JOIN 
             sdss_dr16.specobj AS s ON X.id2 = s.specobjid
         """
df2 = qc.query(sql=query2,fmt='pandas')
df2
CPU times: user 14.5 s, sys: 6.39 s, total: 20.9 s
Wall time: 1min 1s
Out[23]:
id1 id2 ra1 dec1 ra2 dec2 dist_arcsec pmdec pmra w1mpro w2mpro z class veldisp veldisperr
0 1601351000041 4902396990288318464 0.609037 -2.119844 0.609022 -2.119838 0.000016 -104.0 373.0 13.899 13.602 0.198010 GALAXY 290.217 12.5112
1 1601351000062 4902396440532504576 0.608449 -2.010911 0.608431 -2.010910 0.000017 103.0 226.0 14.423 14.137 0.285261 GALAXY 250.935 15.4101
2 1601351000070 4902398089799946240 0.560765 -2.193133 0.560709 -2.193130 0.000057 355.0 524.0 14.688 14.423 0.279780 GALAXY 263.838 17.4279
3 1601351000092 4902395890776690688 0.647265 -2.032009 0.647439 -2.031991 0.000175 -150.0 -562.0 15.051 14.778 0.383525 GALAXY 179.624 22.3847
4 1601351000100 4902398914433667072 0.695043 -2.077576 0.695027 -2.077596 0.000026 -84.0 -462.0 15.104 14.949 0.441733 GALAXY 275.900 27.6191
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
3620144 3101351004389 7919762298910298112 0.103030 -3.763596 0.103005 -3.763521 0.000079 1035.0 -534.0 15.243 15.087 0.774973 GALAXY 207.886 47.0690
3620145 3101351004415 7919743057456812032 359.879261 -3.713877 359.879180 -3.713804 0.000109 80.0 406.0 15.432 15.242 0.607822 GALAXY 196.345 28.5942
3620146 3101351004421 7919763398421925888 0.051025 -3.542625 0.051001 -3.542693 0.000072 207.0 326.0 15.439 15.282 0.617596 GALAXY 793.320 232.7660
3620147 3101351004423 7919765597445181440 0.134919 -3.737547 0.134903 -3.737574 0.000031 329.0 -38.0 15.426 15.233 0.519057 GALAXY 174.910 32.9914
3620148 3101351004430 8889213814119354368 359.854173 -3.659682 359.854210 -3.659661 0.000042 498.0 -762.0 15.561 14.839 3.333650 QSO 0.000 0.0000

3620149 rows × 15 columns

For completeness, we switch the order of the queries and query from a different catalog.

We again select objects from two catalogs and retrieve the same specified columns for two queries. query3 uses a pre-crossmatched table in a JOIN query and query4 crossmatches directly in the JOIN query. We will see that query3 runs for about 60-90 seconds (1-1.5 minutes) and will retrieve the 4.4 million rows we queried for, while query4 times out after 300 seconds (5 minutes) and fails to retrieve results.

First, using pre-crossmatched tables:

In [24]:
%%time
query3 = """SELECT 
           X.id1,X.id2,X.ra1,X.dec1,X.ra2,X.dec2,X.distance as dist_arcsec,
           u.mag_w1_vg,u.mag_w2_vg,s.z,s.class,s.veldisp,s.veldisperr
         FROM 
             unwise_dr1.x1p5__object__sdss_dr16__specobj as X 
         JOIN 
             unwise_dr1.object as u ON X.id1 = u.unwise_objid 
         JOIN 
             sdss_dr16.specobj AS s ON X.id2 = s.specobjid
         ORDER BY 
             random()
         """
df3 = qc.query(sql=query3,fmt='pandas')
df3
CPU times: user 19.6 s, sys: 6.26 s, total: 25.8 s
Wall time: 1min 23s
Out[24]:
id1 id2 ra1 dec1 ra2 dec2 dist_arcsec mag_w1_vg mag_w2_vg z class veldisp veldisperr
0 3433m016o0010121 -8143586336971665408 342.996910 -1.593245 342.996990 -1.593260 0.000081 17.1867 16.3136 2.258040 QSO 0.000 0.00000
1 2400p196o0077931 4425003602017538048 239.849467 20.406885 239.849290 20.406898 0.000167 16.2114 inf 0.552131 GALAXY 222.917 66.66670
2 0015m046o0023737 -7928542484209840128 1.432127 -4.836889 1.431976 -4.837249 0.000390 16.6196 15.7550 1.054150 GALAXY 332.174 88.42950
3 0347p000o0005956 1697930776189888512 34.908691 -0.430688 34.908619 -0.430583 0.000127 15.5226 15.3036 -0.000079 STAR 0.000 0.00000
4 1439p499o0003748 8217016643496464384 144.678777 49.417318 144.678740 49.417429 0.000114 15.9545 14.5466 1.786190 QSO 0.000 0.00000
... ... ... ... ... ... ... ... ... ... ... ... ... ...
4374677 2190p605o0017722 684637338227206144 219.407653 60.636003 219.408170 60.635673 0.000416 14.8213 14.0239 0.108286 GALAXY 181.974 8.47504
4374678 3281p060o0021253 4606304551007703040 327.682062 6.475771 327.682140 6.475755 0.000079 15.1298 15.0044 0.313862 GALAXY 195.650 15.60140
4374679 0211p318o0073872 8701143457736183808 21.315946 32.429631 21.315902 32.429988 0.000359 16.2019 inf 0.761887 GALAXY 280.037 77.72580
4374680 1733p257o0003136 7222880829848178688 174.066874 25.158107 174.066880 25.158167 0.000061 15.6216 15.6454 0.432321 GALAXY 171.368 27.44280
4374681 0272p000o0016857 1693480777687263232 27.615062 0.325480 27.615053 0.325464 0.000018 15.3827 15.5219 -0.000073 STAR 0.000 0.00000

4374682 rows × 13 columns

Now, running the crossmatch ourselves:

In [25]:
%%time
query4 = """SELECT
           u.unwise_objid as id1,u.ra as ra1,u.dec as dec1,u.mag_w1_vg,u.mag_w2_vg,
           ss.specobjid as id2,ss.ra as ra2,ss.dec as dec2,ss.z,ss.class,ss.veldisp,ss.veldisperr,
           (q3c_dist(u.ra,u.dec,ss.ra,ss.dec)*3600.0) as dist_arcsec 
         FROM 
            unwise_dr1.object AS u
         INNER JOIN LATERAL (
            SELECT s.specobjid,s.ra,s.dec,s.z,s.class,s.veldisp,s.veldisperr
            FROM 
                sdss_dr16.specobj AS s
            WHERE
                q3c_join(u.ra, u.dec, s.ra, s.dec, 1.5/3600.0)
            ORDER BY
                q3c_dist(u.ra,u.dec,s.ra,s.dec)
            ASC LIMIT 1
        ) as ss ON true
"""
df4 = qc.query(sql=query4,fmt='pandas')
df4
---------------------------------------------------------------------------
timeout                                   Traceback (most recent call last)
/data0/sw/anaconda3/lib/python3.7/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
    420                     # Otherwise it looks like a bug in the code.
--> 421                     six.raise_from(e, None)
    422         except (SocketTimeout, BaseSSLError, SocketError) as e:

/data0/sw/anaconda3/lib/python3.7/site-packages/urllib3/packages/six.py in raise_from(value, from_value)

/data0/sw/anaconda3/lib/python3.7/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
    415                 try:
--> 416                     httplib_response = conn.getresponse()
    417                 except BaseException as e:

timeout: The read operation timed out

During handling of the above exception, another exception occurred:

ReadTimeoutError: HTTPSConnectionPool(host='datalab.noao.edu', port=443): Read timed out. (read timeout=300)

During handling of the above exception, another exception occurred:

ReadTimeout                               Traceback (most recent call last)
<timed exec> in <module>

/data0/sw/anaconda3/lib/python3.7/site-packages/noaodatalab-2.19.0-py3.7.egg/dl/queryClient.py in _query(self, token, adql, sql, fmt, out, async_, drop, profile, **kw)
   2026 
   2027         # If we're not streaming the request result, process it here.
-> 2028         r = requests.get (dburl, headers=headers, timeout=timeout)
   2029         if r.status_code != 200:
   2030             raise queryClientError (r.text)

ReadTimeout: HTTPSConnectionPool(host='datalab.noao.edu', port=443): Read timed out. (read timeout=300)
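The query above exceeded the client's synchronous 300-second read timeout. For long-running queries, the query client also supports asynchronous execution (note the `async_` keyword in the `qc.query` signature in the traceback). A minimal polling wrapper is sketched below; it assumes the `dl.queryClient`-style `async_`/`status`/`results` interface and a valid Data Lab login token, so treat it as a sketch rather than a drop-in recipe:

```python
import time

def run_async_query(qc, sql, poll=10):
    """Submit a long-running query asynchronously and poll until done.

    Sketch only: assumes a Data Lab query client exposing `async_`,
    `status`, and `results` (as in dl.queryClient) and a login token.
    """
    jobid = qc.query(sql=sql, async_=True)           # returns a job ID immediately
    while qc.status(jobid) not in ('COMPLETED', 'ERROR'):
        time.sleep(poll)                             # wait between status checks
    return qc.results(jobid)                         # fetch the finished result
```

With this pattern the client no longer holds an HTTP request open for the full runtime of the query, so the 300-second read timeout does not apply.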

Appendix

A clear benefit of pre-crossmatched tables is that they contain the positions of the same objects in two datasets. We can use this, for example, to fetch images of an object from both surveys.
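Because the naming template `schema1.xNpN__table1__schema2__table2` is strictly parsable, the components of any crossmatch table name can be recovered programmatically. A minimal sketch (the function name here is ours, not part of the Data Lab API):

```python
def parse_xmatch_name(name):
    """Split a crossmatch table name of the form
    schema1.xNpN__table1__schema2__table2 into its parts."""
    schema1, rest = name.split('.', 1)
    radius_tag, table1, schema2, table2 = rest.split('__')
    # 'x1p5' -> 1.5 arcsec: drop the leading 'x'; 'p' stands in for '.'
    radius = float(radius_tag[1:].replace('p', '.'))
    return {'schema1': schema1, 'table1': table1,
            'schema2': schema2, 'table2': table2,
            'radius_arcsec': radius}
```

For example, `parse_xmatch_name('ls_dr8.x1p5__tractor_primary_n__gaia_dr2__gaia_source')` recovers the two schemas, the two table names, and the 1.5-arcsecond match radius.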

A1. unWISE DR1 vs LS DR8

Here we will compare two images of the same object from two different catalogs, unWISE DR1 and LS DR8.

Function to retrieve cutouts

In [26]:
def make_cutout_comparison_table(ra_in1, dec_in1, layer1, layer2, pixscale, ra_in2=None, dec_in2=None):
    """
    Obtain color JPEG images from the Legacy Survey team's cutout tool at NERSC,
    one cutout per position from each of two image layers.
    """
    # If no second set of positions is given, reuse the first set
    if ra_in2 is None:
        ra_in2 = ra_in1
    if dec_in2 is None:
        dec_in2 = dec_in1

    img1 = []
    img2 = []

    for i in range(len(ra_in1)):
        # %.6f keeps sub-arcsecond precision (%g rounds to 6 significant digits)
        cutout_url1 = "https://www.legacysurvey.org/viewer/cutout.jpg?ra=%.6f&dec=%.6f&layer=%s&pixscale=%s" % (ra_in1[i],dec_in1[i],layer1,pixscale)
        img1.append(plt.imread(download_file(cutout_url1,cache=True,show_progress=False,timeout=120)))

        cutout_url2 = "https://www.legacysurvey.org/viewer/cutout.jpg?ra=%.6f&dec=%.6f&layer=%s&pixscale=%s" % (ra_in2[i],dec_in2[i],layer2,pixscale)
        img2.append(plt.imread(download_file(cutout_url2,cache=True,show_progress=False,timeout=120)))

    return img1,img2

Function to generate plots

In [27]:
def plot_cutouts(img1,img2,cat1,cat2):
    """
    Plot images in two rows with 5 images in each row
    """
    fig = plt.figure(figsize=(21,7))

    for i in range(len(img1)):
        ax = fig.add_subplot(2,6,i+1)
        ax.imshow(img1[i])
        ax.xaxis.set_major_formatter(NullFormatter())
        ax.yaxis.set_major_formatter(NullFormatter())
        ax.tick_params(axis='both',which='both',length=0)
        ax.text(0.02,0.93,'ra=%.5f'%list_ra1[i],transform=ax.transAxes,fontsize=12,color='white')
        ax.text(0.02,0.85,'dec=%.5f'%list_dec1[i],transform=ax.transAxes,fontsize=12,color='white')
        ax.text(0.02,0.77,cat1,transform=ax.transAxes,fontsize=12,color='white')

        ax = fig.add_subplot(2,6,i+7)
        ax.imshow(img2[i])
        ax.xaxis.set_major_formatter(NullFormatter())
        ax.yaxis.set_major_formatter(NullFormatter())
        ax.tick_params(axis='both',which='both',length=0)
        ax.text(0.02,0.93,'ra=%.5f'%list_ra2[i],transform=ax.transAxes,fontsize=12,color='white')
        ax.text(0.02,0.85,'dec=%.5f'%list_dec2[i],transform=ax.transAxes,fontsize=12,color='white')
        ax.text(0.02,0.77,cat2,transform=ax.transAxes,fontsize=12,color='white')

    plt.subplots_adjust(wspace=0.02, hspace=0.03)

Write a query to randomly select five targets (RA/Dec positions) from the unWISE DR1 vs LS DR8 crossmatch table

... then save them as arrays and set the captions, layers, and pixscale. Finally we plot the cutout images.

In [28]:
%%time
q = """SELECT ra1,dec1,ra2,dec2 
        FROM unwise_dr1.x1p5__object__ls_dr8__tractor_primary 
        WHERE ra1>300 AND dec1>33 
        ORDER BY random() 
        LIMIT 5"""

r = qc.query(sql=q,fmt='pandas')

list_ra1=r['ra1'].values       # ".values" converts to a numpy array
list_dec1=r['dec1'].values
list_ra2=r['ra2'].values       
list_dec2=r['dec2'].values

cat1='unWISE DR1'
cat2='LS DR8'
layer1='unwise-neo6'
layer2='ls-dr8'
pixscale='0.3'
img1,img2 = make_cutout_comparison_table(list_ra1,list_dec1,layer1,layer2,
                                         pixscale,list_ra2,list_dec2)
plot_cutouts(img1,img2,cat1,cat2)
CPU times: user 1.6 s, sys: 1.16 s, total: 2.77 s
Wall time: 46.3 s

A2. SDSS vs DES DR1

Here we will compare two images of the same object from two different catalogs, SDSS DR16 and DES DR1.

Write a query to randomly select five targets (RA/Dec positions) from the SDSS DR16 vs DES DR1 crossmatch table

... then save them as arrays and set the captions, layers, and pixscale. Finally we plot the cutout images.

In [29]:
%%time
q = """SELECT ra1,dec1,ra2,dec2 
        FROM sdss_dr16.x1p5__specobj__des_dr1__main 
        ORDER BY random() 
        LIMIT 5"""

r = qc.query(sql=q,fmt='pandas')

list_ra1=r['ra1'].values       # ".values" converts to a numpy array
list_dec1=r['dec1'].values
list_ra2=r['ra2'].values       
list_dec2=r['dec2'].values

cat1='SDSS DR16'
cat2='DES DR1'
layer1='sdss'
layer2='des-dr1'
pixscale='0.25'
img1,img2 = make_cutout_comparison_table(list_ra1,list_dec1,layer1,layer2,
                                         pixscale,list_ra2,list_dec2)
plot_cutouts(img1,img2,cat1,cat2)
CPU times: user 1.55 s, sys: 1.13 s, total: 2.68 s
Wall time: 16.7 s

A3. Cool galaxy finds: SDSS vs DES DR1

We compare two images of the same galaxy from two different catalogs, SDSS DR16 and DES DR1. We use a list of identified galaxies (RA/Dec positions) to compare the observable features and image quality between the two catalogs.

First we import the CSV file of identified galaxies (RA/Dec positions) into MyDB:

In [30]:
qc.mydb_import('gals','./gals.csv',drop=True)
Out[30]:
'OK'
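The import assumes a plain CSV file with a header row naming the columns. A minimal sketch of producing such a file (the positions below are made up for illustration, not the notebook's actual galaxy list):

```python
import csv

# Write a MyDB-importable CSV with a header row (positions are made up)
rows = [(150.11917, 2.20583), (53.16500, -27.79139)]
with open('gals.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['ra', 'dec'])   # column names referenced by the queries
    writer.writerows(rows)
```

Once imported with `qc.mydb_import`, the columns can be selected by name via `mydb://gals`, as in the queries that follow.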

We write the query to select the first five RA/Dec positions from our table. We then save them as arrays and set the captions, layers, and pixscale. Finally we plot the cutout images.

In [31]:
qg = """SELECT ra,dec 
        FROM mydb://gals 
        LIMIT 5"""
rg = qc.query(sql=qg)
rp = convert(rg)
list_ra1=rp['ra'].values 
list_dec1=rp['dec'].values
list_ra2=rp['ra'].values
list_dec2=rp['dec'].values

cat1='SDSS DR16'
cat2='DES DR1'
layer1='sdss'
layer2='des-dr1'
pixscale='0.5'

img1,img2 = make_cutout_comparison_table(list_ra1,list_dec1,layer1,layer2,
                                        pixscale,ra_in2=list_ra1,dec_in2=list_dec1)
plot_cutouts(img1,img2,cat1,cat2)

We write the next query to select the next five RA/Dec positions from our table. We then save them as arrays and set the captions, layers, and pixscale. Finally we plot the cutout images.

In [32]:
qg = """SELECT ra,dec 
        FROM mydb://gals 
        LIMIT 5 
        OFFSET 5"""
rg = qc.query(sql=qg)
rp = convert(rg)
list_ra1=rp['ra'].values      
list_dec1=rp['dec'].values
list_ra2=rp['ra'].values     
list_dec2=rp['dec'].values

img1,img2 = make_cutout_comparison_table(list_ra1,list_dec1,layer1,layer2,
                                        pixscale,ra_in2=list_ra1,dec_in2=list_dec1)
plot_cutouts(img1,img2,cat1,cat2)

We write the next query to select the last five RA/Dec positions from our table. We then save them as arrays and set the captions, layers, and pixscale. Finally we plot the cutout images.

In [33]:
qg = """SELECT ra,dec 
        FROM mydb://gals 
        LIMIT 5 
        OFFSET 10"""
rg = qc.query(sql=qg)
rp = convert(rg)
list_ra1=rp['ra'].values    
list_dec1=rp['dec'].values
list_ra2=rp['ra'].values     
list_dec2=rp['dec'].values

img1,img2 = make_cutout_comparison_table(list_ra1,list_dec1,layer1,layer2,
                                        pixscale,ra_in2=list_ra1,dec_in2=list_dec1)
plot_cutouts(img1,img2,cat1,cat2)
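The three cells above differ only in their OFFSET, which pages through the table five rows at a time. That pattern can be captured in a small query builder (the helper name is ours):

```python
def gals_page_query(offset, limit=5):
    """Build one page of the mydb://gals query used above."""
    return "SELECT ra,dec FROM mydb://gals LIMIT %d OFFSET %d" % (limit, offset)

# One query string per page of five galaxies
queries = [gals_page_query(off) for off in (0, 5, 10)]
```

Each string can then be passed to `qc.query(sql=...)` and plotted exactly as in the cells above.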

Resources & references